QCTO Blog
Monday, September 14, 2009
Written by Finn Ellebaek Nielsen
Introduction
In my previous blog posts I suggested various best
practices for establishing a test policy, as well as a test strategy
for Oracle code. I've also described test approaches adhering to the
policy and strategy. Before moving on to test design, I would like to
write a few words on the amount of testing required under different
circumstances.
How Much Testing Do We Need?
You might ask: "Where is the break-even point between the
investment in testing and the return in terms of reduced risk/cost
and increased quality?" This is a difficult question to answer in
general, as it's very specific to your project. However, we can divide
projects into two generic categories:
- Medium to low risk projects
- High risk projects
In general, these two categories of projects
require different amounts of testing. This means that we can determine
the amount of testing required through the identification of the project
category.
In order to determine the category of the project,
we need to carry out a preliminary analysis of the product risk, which
is the risk associated with the software product we produce on the
project. Product risk examines the possibility that the software fails
to satisfy reasonable expectations. This can occur in many different
ways, such as:
- Key functionality is missing.
- Poor reliability; the system is unstable.
- Failures cause financial or physical damage.
- Poor security, e.g., vulnerable to break-ins, injection or denial-of-service attacks.
- Poor usability.
- Poor performance.
Medium to Low Risk Projects
If your risk associated with all production defects is medium to low, I suggest you approach testing as follows:
- Legacy projects: Introducing automated tests
will be a major improvement over what you had, and coverage can be
improved further over time. The difficult part is knowing where to stop.
If you follow my earlier suggestion of introducing test cases for code
that needs to be changed, the problem is reduced to determining which,
and how many, test cases you need.
- Greenfield projects: As previously mentioned, I
suggest that you test everything on greenfield projects. So here, too,
the problem is "only" the amount and types of test cases.
So in fact the only difference is the scope of the testing (a specific program/subprogram versus everything). The depth of the testing should be the same.
The Code Coverage (CC) threshold you've
established will guide your test case design and implementation, as you
can monitor your progress towards the goal of your CC threshold; i.e.,
towards the point where you have tested sufficiently according to your
standards. If it then turns out that this wasn't enough because you
encounter too many defects in production, you can reassess the CC
threshold, or perhaps differentiate it and increase it for specific
units (e.g., the most central and critical ones).
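One way to make such a differentiated CC threshold operational is to check measured coverage against a per-unit map as part of the build. The sketch below is purely illustrative; the unit names, coverage figures and thresholds are made up:

```python
# Hypothetical code-coverage figures per program unit (percent).
measured_coverage = {
    "pkg_orders": 92.0,
    "pkg_billing": 78.5,
    "pkg_audit": 85.0,
}

# Default threshold, raised for the most central/critical units.
DEFAULT_THRESHOLD = 75.0
unit_thresholds = {"pkg_billing": 90.0}  # critical unit gets a stricter goal


def units_below_threshold(coverage, default=DEFAULT_THRESHOLD, overrides=unit_thresholds):
    """Return the units whose coverage falls short of their threshold."""
    return [
        unit
        for unit, pct in coverage.items()
        if pct < overrides.get(unit, default)
    ]


# pkg_billing meets the default but misses its raised, unit-specific goal.
print(units_below_threshold(measured_coverage))
```

A build step could then fail (or at least warn) whenever this list is non-empty, making the differentiated threshold enforceable rather than aspirational.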
High Risk Projects
If your preliminary analysis revealed that the
risk is above medium, you need to perform a more detailed analysis of the
product risk at hand. This can be done in a number of ways, but most
people seem to prefer carrying it out as a workshop with a team of
project stakeholders, testers and developers.
You can divide product risk into categories such
as functionality, reliability, performance, usability and security.
You then identify a list of risk items within each category. For
each item you agree on the following:
- Likelihood: The likelihood of this risk item
occurring. You can assign one of the following values (some use a scale
of 1-3, some a scale of 1-10):
- 1: Very unlikely.
- 2: Unlikely.
- 3: 50/50.
- 4: Likely.
- 5: Highly likely.
- Impact: The impact on the business, users etc.
if this risk item occurs. Once again, you can use a coarser or finer
scale:
- 1: No loss.
- 2: Minor loss.
- 3: Some loss.
- 4: Significant loss.
- 5: Immense loss.
Based on likelihood and impact, you calculate a
risk priority by multiplying them. So if you have a risk item with a
likelihood of "unlikely (2)" and an impact of "immense loss (5)", the
risk priority is 2 * 5 = 10.
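The calculation itself is trivial and can be sketched in a few lines; the risk items and scores below are hypothetical examples, not from any real project:

```python
# Hypothetical risk items: (name, likelihood 1-5, impact 1-5).
risk_items = [
    ("Key functionality missing", 2, 5),
    ("Poor performance under load", 3, 3),
    ("SQL injection vulnerability", 1, 5),
]

# Risk priority is simply likelihood multiplied by impact.
for name, likelihood, impact in risk_items:
    priority = likelihood * impact
    print(f"{name}: {likelihood} * {impact} = {priority}")
```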
You can document the product risk analysis in a
spreadsheet with the following structure (you could also spread the risk
categories across worksheets, making it easier to order by risk
priority):
| Product Risk    | Likelihood | Impact | Risk Priority | Mitigation |
| --------------- | ---------- | ------ | ------------- | ---------- |
| Risk Category 1 |            |        |               |            |
| Risk 1          |            |        |               |            |
| Risk 2          |            |        |               |            |
| Risk Category 2 |            |        |               |            |
| Risk 3          |            |        |               |            |
| Risk 4          |            |        |               |            |
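Such a risk register can also be modelled in code, with items ordered by risk priority so the mitigation effort targets the top of the list. Again, the categories, risks and scores here are invented for illustration:

```python
# Hypothetical risk register: (category, risk, likelihood 1-5, impact 1-5).
register = [
    ("Functionality", "Key report gives wrong totals", 3, 5),
    ("Security", "Injection via search field", 2, 5),
    ("Performance", "Nightly batch misses its window", 4, 3),
    ("Usability", "Confusing error messages", 4, 2),
]

# Derive the risk priority (likelihood * impact) and sort highest first.
prioritised = sorted(
    ((cat, risk, l, i, l * i) for cat, risk, l, i in register),
    key=lambda row: row[4],
    reverse=True,
)

for cat, risk, likelihood, impact, priority in prioritised:
    print(f"{priority:>3}  {cat:<13} {risk}")
```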
Mitigation for the highest-priority risks could be
to do the following for the program units involved: differentiate the
CC threshold by increasing it, and focus on a more detailed code
review.
The risk analysis is dynamic and should be
revisited often as you learn new things about the system and perhaps
discover new areas to be included over time.
Conclusion
It's important to remember that successful
execution of your test cases against a given Software Under Test (SUT)
doesn't prove that the SUT is free of defects. It only demonstrates
correct behavior for the test cases you've designed and
implemented.
However, the product risk analysis and the CC
threshold you've established drive the test design and have a direct
influence on the amount of testing required and implemented. This
ensures that you design and implement a test effort appropriate to your
project.
If the product risk changes over time (either
because of new knowledge or changed circumstances), you will need to
reassess the analysis, and the amount of testing may have to change.
Future Blog Posts
Future blog posts will cover related topics such as:
- Test design tips & tricks.